The Eliza Effect
Hamzeh
Ashraf
Eliza, or more properly Eliza/Doctor, is a groundbreaking natural language processing system created by the computer scientist Joseph Weizenbaum in the 1960s. It was named after Eliza Doolittle in Pygmalion. Driven by a script named DOCTOR, it could engage humans in a conversation that bore a striking resemblance to one with an empathic psychologist. Weizenbaum modeled its conversational style after Carl Rogers, who introduced the use of open-ended questions to encourage patients to communicate more freely with their therapists.
The program applied pattern-matching rules to the user's statements to work out its replies. Weizenbaum was shocked that his program was taken seriously by many users, who would open their hearts to it.
The ELIZA effect is the tendency to unconsciously assume that computer behaviors are analogous to human behaviors. A simple example of the Eliza effect is an automated teller machine that displays the words "THANK YOU" at the end of a transaction. A casual observer might think that the machine is actually expressing gratitude; however, the machine is only printing a preprogrammed string of symbols.
How does Eliza work?
Eliza was experienced through something like a typewriter, allowing the computer to type to the user and the user to type back. People could type anything they wanted to Eliza, and the software would respond in different ways depending on the currently active script.
Example Conversation:
ELIZA: How do you do? Please tell me your problem.
(After this point, Eliza will not take the initiative again; it only responds.)
User: Well, my boyfriend made me come here.
ELIZA: Your boyfriend made you come here?
User: He says I'm depressed much of the time.
ELIZA: I am sorry to hear you are depressed.
User: It's true. I'm unhappy.
ELIZA: Do you think coming here will help you not to be unhappy?
Analysis:
1- A session with Eliza begins with a greeting. For example, the DOCTOR script starts with "How do you do. Please tell me your problem." (After this, Eliza never takes the initiative again; it only responds.)
2- It rephrases the patient's (user's) replies as questions to prompt a new answer and carry the conversation forward.
3- It uses substitutions that switch first-person pronouns for second-person pronouns ("I" becomes "you" and vice versa).
4- Eliza scans what the audience member types, looking for a keyword that matches its script, and responds accordingly.
5- Keywords have levels of importance. When the first keyword is found in the text, it is added to a "keystack." Each time a new keyword is found, it is compared with the highest-ranked keyword found so far; if the new word has a higher rank, it is placed at the top of the stack.
6- Transformation: some rules apply indirect transformations; for example, a sentence like "Perhaps I could learn to get along with my mother" can be transformed into "Tell me more about your family."
7- When no keyword is found, the program uses content-free phrases such as "Please go on" or "I see."
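To make steps 3 through 7 concrete, here is a minimal Python sketch of the keyword cycle: scan for the highest-ranked keyword, swap pronouns in the rest of the sentence, fill a reassembly template, and fall back to a content-free phrase when nothing matches. The keywords, ranks, and templates are illustrative assumptions, not the real DOCTOR script.

```python
import random
import re

# Hypothetical mini-script: keyword -> (rank, reassembly templates).
# "*" in a template is replaced by the rest of the user's sentence.
SCRIPT = {
    "mother":    (3, ["Tell me more about your family."]),
    "depressed": (2, ["I am sorry to hear you are depressed."]),
    "boyfriend": (1, ["Your boyfriend *?"]),
}

# Content-free fallbacks used when no keyword is found (step 7).
FALLBACKS = ["Please go on.", "I see."]

# First-person to second-person substitutions (step 3).
PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are"}


def swap_pronouns(text):
    return " ".join(PRONOUNS.get(word, word) for word in text.split())


def respond(user_input):
    lowered = user_input.lower()
    words = re.findall(r"[a-z']+", lowered)
    # Step 5: collect matching keywords, highest rank first (the "keystack").
    keystack = sorted((w for w in words if w in SCRIPT),
                      key=lambda w: SCRIPT[w][0], reverse=True)
    if not keystack:
        return random.choice(FALLBACKS)                      # step 7
    keyword = keystack[0]
    _, templates = SCRIPT[keyword]
    # Steps 4 and 6: echo what followed the keyword, with pronouns swapped.
    tail = lowered.split(keyword, 1)[1].strip(" .!?")
    return random.choice(templates).replace("*", swap_pronouns(tail))


print(respond("Well, my boyfriend made me come here."))
# -> Your boyfriend made you come here?
print(respond("He says I'm depressed much of the time."))
# -> I am sorry to hear you are depressed.
print(respond("Hmm."))
# -> Please go on.  (or "I see.")
```

Even this toy script reproduces the "Your boyfriend made you come here?" exchange from the example conversation above, which is part of why such shallow pattern matching can feel so convincing.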
Each decomposition rule can have a set of reassembly rules associated with it. Each statement the Eliza script produces is a response to the most recent audience statement. The transformation becomes difficult when no keywords are found in the most recent text, the situation covered by the content-free phrases mentioned above. In these cases Eliza also relies on a special structure called "Memory."
This structure works in combination with a particular keyword. When that keyword is present, a response is generated in the usual way, but an additional reassembly is also created and stored in a queue; when a later statement contains no keywords, the program can draw on these queued reassemblies instead of a content-free phrase.
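As a rough illustration of this memory mechanism, the sketch below assumes the trigger keyword is "my" (the keyword usually described as driving Memory in the DOCTOR script) and uses made-up memory templates: when the trigger appears, an extra remark is queued, and when a later statement contains no keywords, a queued remark is preferred over a content-free phrase.

```python
from collections import deque

# Illustrative memory templates; "*" is replaced by what followed the keyword.
MEMORY_TEMPLATES = ["Earlier you said your *.",
                    "Does that have anything to do with your *?"]
memory = deque()   # queue of remarks saved for later


def remember(user_input):
    """If the trigger keyword 'my' appears, store an extra remark in the queue."""
    words = user_input.lower().strip(" .!?").split()
    if "my" in words:
        tail = " ".join(words[words.index("my") + 1:])
        template = MEMORY_TEMPLATES[len(memory) % len(MEMORY_TEMPLATES)]
        memory.append(template.replace("*", tail))


def no_keyword_response():
    """When no keyword is found, prefer a remembered remark over a content-free one."""
    return memory.popleft() if memory else "Please go on."


remember("Perhaps I could learn to get along with my mother.")
print(no_keyword_response())   # -> Earlier you said your mother.
print(no_keyword_response())   # -> Please go on.
```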
When audiences interact with such a program without knowing how it actually operates internally, they assume that the software must be internally complex, since the surface appearance of the interaction can resemble a coherent dialogue. This misunderstanding is called the "Eliza effect."
Weizenbaum later wrote in a book that the internals of computers are not magical, and that we do ourselves a disservice when we assume that human intelligence can be matched by a computational machine. This moved him from being a computer scientist to being one of the first knowledgeable critics to interrogate the cultures of computing and artificial intelligence.
Lucy Suchman published Plans and Situated Actions (1987), in which she sees Eliza as an iconic example in human-computer interaction of the broad phenomenon of treating systems as intelligent based on limited evidence.
Suchman presents one of Garfinkel's experiments as evidence that people tend to "take appearances as evidence for an underlying reality, while taking the reality as a resource for the interpretation of the appearance."
Yet none of the three, Weizenbaum, Suchman, and Murray, goes far into discussing ongoing interaction with Eliza, because it would complicate their discussions. To some extent Eliza succeeded because it plays on the interpretative expectations that audience members bring to each interaction.
The Eliza effect can be shielded from breakdown by severely restricting interaction. In the experiment above, the scope of possible responses was so limited that subjects could maintain the illusion that something much more complex was going on inside the system and that their questions were being answered thoughtfully rather than with random yes/no answers.
When breakdown occurs, the audience can begin to develop a model of the system: from the shape of the breakdown they begin to understand its processes, and they can then employ that knowledge either to help maintain the illusion or to compromise it further. Some authors, such as Jeremy Douglass, assert that breakdown can be an interesting mode for digital fictions, and certainly breakdowns can be fascinating.
Discussion Questions:
1- Why do you think people communicate more freely with a program like Eliza/Doctor than they do with a human being?
2- Can you locate examples of the Eliza effect in everyday life?
3- To what extent do you think computer therapy is effective?